OpenShift Origin 3.6 : Install
2017/11/23
Install OpenShift Origin, the open-source version of Red Hat OpenShift.
This example is based on the environment shown below.

-----------+-----------------------------------------------------------+------------
           |10.0.0.30                  |10.0.0.51                  |10.0.0.52
+----------+-----------+    +----------+-----------+    +----------+-----------+
|  [ dlp.srv.world ]   |    | [ node01.srv.world ] |    | [ node02.srv.world ] |
|    (Master Node)     |    |    (Compute Node)    |    |    (Compute Node)    |
|    (Compute Node)    |    |                      |    |                      |
+----------------------+    +----------------------+    +----------------------+
There are some system requirements for configuring the cluster:
* The Master Node needs at least 16 GB of memory.
* On all Nodes, free space on a physical volume ( an unused disk or partition ) is required to create a new volume group for Docker Direct LVM.
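The requirements above can be checked quickly on each node before installing; a minimal pre-flight sketch ( the device /dev/sdb1 referenced in the comment is this example's spare partition, chosen in step [2] ):

```shell
# total memory in GB ( the Master Node should report 16 or more )
free -g | awk '/^Mem:/ {print "memory (GB): " $2}'

# list block devices to find an unused disk or partition for the
# Docker volume group ( this example uses /dev/sdb1 in step [2] )
lsblk
```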
[1] | On all Nodes, create a user for Ansible to use during the installation, and grant that user passwordless root privileges. |
[root@dlp ~]# useradd origin
[root@dlp ~]# passwd origin
[root@dlp ~]# echo -e 'Defaults:origin !requiretty\norigin ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/openshift
[root@dlp ~]# chmod 440 /etc/sudoers.d/openshift

# if Firewalld is running, allow SSH
[root@dlp ~]# firewall-cmd --add-service=ssh --permanent
[root@dlp ~]# firewall-cmd --reload
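Optionally, the new sudoers fragment can be checked before moving on; a sketch ( visudo -c only validates syntax and changes nothing ):

```shell
# validate the syntax of the sudoers fragment created above
visudo -cf /etc/sudoers.d/openshift

# show the two directives written by the echo above
cat /etc/sudoers.d/openshift

# list what the origin user may run with sudo
sudo -l -U origin
```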
[2] | On all Nodes, install the OpenShift Origin 3.6 repository and Docker. Then create a volume group for Docker Direct LVM to set up an LVM thin pool, as follows. |
[root@dlp ~]# yum -y install centos-release-openshift-origin36 docker
[root@dlp ~]# vgcreate vg_origin01 /dev/sdb1
  Volume group "vg_origin01" successfully created
[root@dlp ~]# echo VG=vg_origin01 >> /etc/sysconfig/docker-storage-setup
[root@dlp ~]# systemctl start docker
[root@dlp ~]# systemctl enable docker
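On each node it is worth confirming that Docker really came up with the Direct LVM thin pool carved out of vg_origin01; a sketch:

```shell
# the setting docker-storage-setup read when docker started
grep '^VG=' /etc/sysconfig/docker-storage-setup

# a docker-pool thin pool LV should now exist in the volume group
lvs vg_origin01

# docker should report the devicemapper driver and that pool
docker info 2>/dev/null | grep -E 'Storage Driver|Pool Name'
```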
[3] | On the Master Node, log in as the user created above and generate an SSH key pair with no passphrase. |
[origin@dlp ~]$ ssh-keygen -q -N ""
Enter file in which to save the key (/home/origin/.ssh/id_rsa):

[origin@dlp ~]$ vi ~/.ssh/config
# create new ( define nodes )
Host dlp
    Hostname dlp.srv.world
    User origin
Host node01
    Hostname node01.srv.world
    User origin
Host node02
    Hostname node02.srv.world
    User origin

[origin@dlp ~]$ chmod 600 ~/.ssh/config

# transfer the public key to the other nodes
[origin@dlp ~]$ ssh-copy-id node01
origin@node01.srv.world's password:

Number of key(s) added: 1
Now try logging into the machine, with:   "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.

[origin@dlp ~]$ ssh-copy-id node02
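Before running the playbook in the next step, password-less SSH to every node can be verified in one pass; a sketch using the Host aliases defined in ~/.ssh/config ( BatchMode makes ssh fail instead of prompting for a password ):

```shell
for h in dlp node01 node02; do
    # BatchMode=yes: never prompt; ConnectTimeout keeps failures fast
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" true 2>/dev/null; then
        echo "$h: OK"
    else
        echo "$h: NG"
    fi
done
```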
[4] | On the Master Node, log in as the user created above, define the cluster in the Ansible inventory, and run the Ansible playbook to set up the OpenShift cluster. |
# add the following to the end of the Ansible inventory ( default: /etc/ansible/hosts )
[OSEv3:children]
masters
nodes

[OSEv3:vars]
# admin user created in previous section
ansible_ssh_user=origin
ansible_become=true
openshift_deployment_type=origin
# use HTPasswd for authentication
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/.htpasswd'}]
openshift_master_default_subdomain=apps.srv.world
# allow unencrypted connection within cluster
openshift_docker_insecure_registries=172.30.0.0/16

[masters]
dlp.srv.world openshift_schedulable=true containerized=false

[etcd]
dlp.srv.world

[nodes]
# set labels [region: ***, zone: ***] (any name you like)
dlp.srv.world openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node01.srv.world openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_schedulable=true
node02.srv.world openshift_node_labels="{'region': 'primary', 'zone': 'west'}" openshift_schedulable=true

[origin@dlp ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
2017-11-23 19:17:14,120 p=1889 u=root |  PLAY [Create initial host groups for localhost]
2017-11-23 19:17:14,127 p=1889 u=root |  TASK [include_vars] ****************************
................
................
PLAY RECAP *********************************************************************
dlp.srv.world              : ok=643  changed=177  unreachable=0  failed=0
localhost                  : ok=12   changed=0    unreachable=0  failed=0
node01.srv.world           : ok=246  changed=66   unreachable=0  failed=0
node02.srv.world           : ok=246  changed=66   unreachable=0  failed=0

# show state
[origin@dlp ~]$ oc get nodes
NAME               STATUS    AGE       VERSION
dlp.srv.world      Ready     19m       v1.6.1+5115d708d7
node01.srv.world   Ready     19m       v1.6.1+5115d708d7
node02.srv.world   Ready     19m       v1.6.1+5115d708d7

# show state with labels
[origin@dlp ~]$ oc get nodes --show-labels=true
NAME               STATUS    AGE       VERSION      LABELS
dlp.srv.world      Ready     28m       v1.6.1+...   beta.kuber...hostname=dlp.srv.world,region=infra,zone=default
node01.srv.world   Ready     28m       v1.6.1+...   beta.kuber...hostname=node01.srv.world,region=primary,zone=east
node02.srv.world   Ready     28m       v1.6.1+...   beta.kuber...hostname=node02.srv.world,region=primary,zone=west
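Since the inventory above enables the HTPasswd identity provider backed by /etc/origin/master/.htpasswd, a first account can now be added on the Master Node and used to log in. A sketch: "testuser" and "password" are hypothetical values, and the htpasswd command comes from the httpd-tools package.

```shell
# add a user to the htpasswd file referenced by the identity provider
# ( -b takes the password on the command line; for real use, omit -b
#   and type the password interactively )
sudo htpasswd -b /etc/origin/master/.htpasswd testuser password

# log in to the cluster as the new user and confirm the identity
oc login -u testuser -p password
oc whoami
```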